
    The role of temporal frequency in continuous flash suppression: A case for a unified framework

    In continuous flash suppression (CFS), a rapidly changing Mondrian sequence is presented to one eye in order to suppress a static target presented to the other eye. Targets generally remain suppressed for several seconds at a time, contributing to the widespread use of CFS in studies of unconscious visual processes. Nevertheless, the mechanisms underlying CFS suppression remain unclear, complicating its use and the interpretation of results obtained with the technique. As a starting point, this thesis examined the role of temporal frequency in CFS suppression using carefully controlled stimuli generated by Fourier transform techniques. Because temporal frequency is a low-level stimulus attribute, manipulating it allowed us to evaluate the contributions of early visual processes and to test the general assumption that fast update rates drive CFS effectiveness. Three psychophysical studies are described in this thesis, covering the temporal frequency tuning of CFS (Chapter 2), the relationship between the Mondrian pattern and temporal frequency content (Chapter 3), and the role of temporal frequency selectivity in CFS (Chapter 4). Contrary to conventional wisdom, the results showed that the suppression of static targets is largely driven by high spatial frequencies and low temporal frequencies. Faster masker rates, on the other hand, worked best with transient targets. Indicative of early, feature-selective processes, these findings are reminiscent of binocular rivalry suppression and point to the possibility of a unified framework.

    Do congruent tactile temporal frequencies improve visual contrast temporal frequency?

    The interplay among the different sensory modalities is well documented in cross-modal perceptual research, and these observations are presumably supported by direct or indirect connections among the different uni-sensory cortical mechanisms. While interconnected uni-sensory mechanisms could subserve the perception of disparate cross-modal stimuli such as visual orientation and auditory intensity, as these are modality-specific attributes, this organisation seems redundant and inefficient for processing stimulus properties common to all modalities, such as temporal modulations. This thesis investigates whether cross-modal temporal frequencies share a common temporal frequency channel. Four psychophysical experiments (Chapters 3-6) examining the effect of congruent tactile temporal frequency on visual flicker detection were conducted, and while the results point to early temporal frequency interactions, they do not support the common-channel hypothesis. Only significantly supra-threshold congruent tactile temporal frequencies improved visual flicker detection, and there was no evidence of additive effects with increasing tactile signal strength, as none of the visual-tactile conditions differed significantly from each other. These observations point to a weak tactile influence on visual flicker detection and argue against a common temporal frequency channel. Alternative mechanisms such as cross-modal feedback influences, higher-order supramodal channels and phase-resetting processes may explain these results. These mechanisms and the theoretical implications of these experiments are discussed in the General Discussion.


    Are auditory percepts determined by experience?

    Audition (what listeners hear) is generally studied in terms of the physical properties of sound stimuli and the physiological properties of the auditory system. Based on recent work in vision, here we consider an alternative perspective: that sensory percepts are based on past experience. In this framework, basic auditory qualities (e.g., loudness and pitch) are based on the frequency of occurrence of stimulus patterns in natural acoustic stimuli. To explore this concept of audition, we examined five well-documented psychophysical functions. The frequency of occurrence of acoustic patterns in a database of natural sound stimuli (speech) predicts some qualitative aspects of these functions, but with substantial quantitative discrepancies. This approach may offer a rationale for auditory phenomena that are difficult to explain in terms of the physical attributes of the stimuli as such.

    Expression of emotion in Eastern and Western music mirrors vocalization.

    In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states parallel the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.

    A matched comparison across three different sensory pairs of cross-modal temporal recalibration from sustained and transient adaptation

    Sustained exposure to an asynchronous multisensory signal causes perceived simultaneity to shift in the direction of the leading component of the adapting stimulus. This is known as temporal recalibration, and recent evidence suggests that it can occur very rapidly, even after a single asynchronous audiovisual (AV) stimulus. However, this form of rapid recalibration appears to be unique to AV stimuli, in contrast to recalibration following sustained asynchronies, which occurs with audiotactile (AT) and visuotactile (VT) stimuli. This study examines temporal recalibration to AV, VT and AT asynchrony with spatially collocated stimuli, using a design that produces both sustained and inter-trial recalibration by combining the traditional sustained-adaptation approach with an inter-trial analysis of sequential dependencies in an extended test period. Thus, we compare temporal recalibration to both sustained and transient asynchrony in three crossmodal combinations using the same design, stimuli and observers. The results reveal that prolonged exposure to asynchrony produced equivalent temporal recalibration for all combinations: AV, AT and VT. The pattern for rapid, inter-trial recalibration was very different. Rapid recalibration occurred strongly for AV stimuli, weakly for AT, and did not occur at all for VT. For all sensory pairings, recalibration from sustained asynchrony decayed to baseline during the test phase, while inter-trial recalibration was present and stable throughout testing, suggesting that different mechanisms may underlie adaptation at long and short timescales.

    Musical intervals in Carnatic and Western music.

    <p>(A) The 12 principal intervals of Carnatic music (13 including unison). Each interval is a tone defined by the ratio of its fundamental frequency to the tonic (Sa). Interval names, abbreviations, frequency ratios, and sizes in cents for just intonation (JI) as well as 12-tone equal temperament (12-TET) tunings are shown. When two names are given they refer to enharmonic equivalents. Here and in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone-0031942-g002" target="_blank">Figure 2</a>, a dot above or below the abbreviated interval name indicates that it belongs in the octave above or below, respectively. (B) The 12 intervals of the Western chromatic scale, comparably presented.</p>
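    The cents values in a table like this follow directly from the frequency ratios via cents = 1200 × log2(ratio), with the 12-TET size being the nearest multiple of 100 cents. A minimal sketch of that arithmetic (the ratios below are standard just-intonation values chosen for illustration, not copied from the figure):

    ```python
    import math

    def ratio_to_cents(ratio: float) -> float:
        """Convert a frequency ratio to cents: 1200 * log2(ratio)."""
        return 1200 * math.log2(ratio)

    # Standard just-intonation ratios (illustrative examples only)
    just_intervals = {
        "unison": 1 / 1,
        "major third": 5 / 4,
        "perfect fifth": 3 / 2,
        "octave": 2 / 1,
    }

    for name, ratio in just_intervals.items():
        ji_cents = ratio_to_cents(ratio)
        tet_cents = round(ji_cents / 100) * 100  # nearest 12-TET step
        print(f"{name}: JI = {ji_cents:.1f} cents, 12-TET = {tet_cents} cents")
    ```

    The perfect fifth, for example, comes out at about 702.0 cents in just intonation versus exactly 700 cents in 12-TET, which is why the two tunings diverge slightly for most intervals.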

    The <i>ragas</i> and modes examined.

    <p>(A) Carnatic <i>ragas</i> commonly associated with positive/excited and negative/subdued emotion, and the number of melodies examined in each. The interval name abbreviations are from <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone-0031942-g001" target="_blank">Figure 1</a>. (B) Western modes commonly associated with positive/excited and negative/subdued emotion are included for comparison (data from <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone.0031942-Bowling1" target="_blank">[19]</a>).</p>

    Distributions of musical interval sizes in Carnatic and Western melodies associated with different emotions.

    <p>(A) Overlay of the average distributions of melodic interval sizes in melodies composed in <i>ragas</i> associated with positive/excited (red) and negative/subdued (blue) emotion (purple shows overlap). Inset shows the mean percentages of melodic intervals smaller and larger than a major second (dashed lines separate these groups). Error bars indicate ±2 <i>SEM</i>. Asterisks indicate statistically significant differences between the underlying distributions (*<i>P</i><0.05; Mann-Whitney <i>U</i>-tests; see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone.0031942.s002" target="_blank">Figure S2</a> for complete statistics). (B) The mean percentages of different tonic interval sizes in melodies composed in <i>ragas</i> associated with positive/excited and negative/subdued emotion. Intervals are grouped according to their prevalence in positive/excited <i>raga</i> melodies or negative/subdued <i>raga</i> melodies, and listed by size differences in the two emotional conditions. Colored boxes highlight intervals that differ by more than 5% (red indicates greater prevalence in positive/excited melodies, blue indicates greater prevalence in negative/subdued melodies). Asterisks indicate statistically significant differences between the underlying distributions (*<i>P</i><0.05; Mann-Whitney <i>U</i>-tests; see <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone.0031942.s003" target="_blank">Figure S3</a> for complete statistics). (C and D) Data from an analysis of major and minor classical Western melodies <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone.0031942-Bowling1" target="_blank">[19]</a> shown in the same format.</p>

    Method of voiced speech extraction.

    <p>Panel 1 shows the waveform (gray; sound pressure level) of a portion of a speech recording (the phrase “million dollars”) overlaid with indicators of F0 (blue; Hz) and relative intensity (green; dB SPL) calculated at each 10 ms time-step. The missing segments in the F0 contour are segments of “unvoiced” speech (see Text S and <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0031942#pone.0031942.s008" target="_blank">S2</a>). Panel 2 shows the same information but with time-points representing local maxima in the intensity contour (arrows) that are also voiced indicated (red crosses). Panel 3 shows the 50 ms windows (red) of voiced speech extracted for spectral analysis; the segments are centered on the voiced intensity maxima in the middle panel.</p>